Traffic flow prediction is an important part of intelligent transportation. The goal is to predict future traffic conditions based on historical data recorded by sensors on the traffic network. As cities continue to develop, parts of the transportation network are added or modified, so accurately predicting these expanding and evolving long-term streaming networks is of great significance. To this end, we propose a new simulation-based criterion that teaches an autonomous agent to mimic sensor patterns, planning its next visit based on the sensor's profile (e.g., traffic volume, speed, occupancy). The data recorded by a sensor is modeled most accurately when the agent can perfectly simulate the sensor's activity pattern. We formulate the problem as a continuous reinforcement learning task, where the agent is the predictor of the next flow value, the action is the next time-series flow value at the sensor, and the environment state is a dynamically fused representation of the sensor and the transportation network. Actions taken by the agent change the environment, which in turn forces the agent's policy to update, while the agent's further exploration of the changing dynamic traffic network helps it predict its next visit more accurately. We therefore develop a strategy in which the sensors and the traffic network update each other, and incorporate temporal context to quantify state representations that evolve over time.
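The reinforcement-learning formulation above (agent = next-flow-value predictor, action = predicted flow value, reward = fidelity to the sensor's actual reading) can be illustrated with a minimal sketch. All names here are hypothetical and the persistence baseline is only a stand-in for the learned agent; this is not the paper's implementation.

```python
class SensorEnv:
    """Toy environment whose state is a sliding window of recent sensor readings."""
    def __init__(self, readings, window=3):
        self.readings = readings
        self.window = window
        self.t = window  # index of the next reading to predict

    def state(self):
        return tuple(self.readings[self.t - self.window:self.t])

    def step(self, action):
        """Action = predicted next flow value; reward = negative absolute error."""
        actual = self.readings[self.t]
        reward = -abs(action - actual)
        self.t += 1
        done = self.t >= len(self.readings)
        return (self.state() if not done else None), reward, done

def persistence_agent(state):
    """Naive baseline policy: predict the last observed flow value."""
    return state[-1]

readings = [10, 12, 11, 13, 12, 14]
env = SensorEnv(readings)
total = 0.0
done = False
while not done:
    action = persistence_agent(env.state())
    _, reward, done = env.step(action)
    total += reward
```

A learned agent would replace `persistence_agent` and would also observe the (dynamically fused) network representation, not just its own window.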
Dense retrieval aims to map queries and passages into a low-dimensional vector space for efficient similarity measuring, showing promising effectiveness in various large-scale retrieval tasks. Since most existing methods commonly adopt pre-trained Transformers (e.g. BERT) for parameter initialization, some work focuses on proposing new pre-training tasks for compressing the useful semantic information from passages into dense vectors, achieving remarkable performance. However, it is still challenging to effectively capture the rich semantic information and relations about passages in dense vectors via one single pre-training task. In this work, we propose a multi-task pre-trained model, MASTER, that unifies and integrates multiple pre-training tasks with different learning objectives under a bottlenecked masked autoencoder architecture. Concretely, MASTER utilizes a multi-decoder architecture to integrate three types of pre-training tasks: corrupted passages recovering, related passages recovering, and PLM outputs recovering. By incorporating a shared deep encoder, we construct a representation bottleneck in our architecture, compressing the abundant semantic information across tasks into dense vectors. The first two types of tasks concentrate on capturing the semantic information of passages and the relationships among them within the pre-training corpus. The third one can capture knowledge beyond the corpus from external PLMs (e.g. GPT-2). Extensive experiments on several large-scale passage retrieval datasets have shown that our approach outperforms the previous state-of-the-art dense retrieval methods. Our code and data are publicly released at https://github.com/microsoft/SimXNS
Knowledge distillation is often used to transfer knowledge from a strong teacher model to a relatively weak student model. Traditional knowledge distillation methods include response-based and feature-based methods. Response-based methods are the most widely used but suffer from a lower upper bound on model performance, while feature-based methods impose constraints on the vocabulary and tokenizer. In this paper, we propose LEAD, a tokenizer-free liberal feature-based distillation method. LEAD aligns the distributions of the teacher and student models; it is effective, extendable, and portable, and places no requirements on the vocabulary, tokenizer, or model architecture. Extensive experiments show the effectiveness of LEAD on several widely used benchmarks, including MS MARCO Passage, TREC Passage 19, TREC Passage 20, MS MARCO Document, TREC Document 19, and TREC Document 20.
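One common tokenizer-free way to align teacher and student distributions is a KL-divergence loss over their relevance-score distributions for the same candidate passages, since scores need no shared vocabulary. The sketch below illustrates this generic idea only; the exact alignment objective in LEAD may differ.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions of equal length."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical teacher/student relevance scores for the same three passages.
teacher_scores = [3.0, 1.0, 0.5]
student_scores = [2.0, 1.5, 0.5]

# Distillation loss: push the student's score distribution toward the teacher's.
loss = kl_divergence(softmax(teacher_scores), softmax(student_scores))
```

Because only the per-query score distributions are compared, the two models are free to use different tokenizers, vocabularies, or architectures.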
Multi-task learning (MTL) models have demonstrated impressive results in computer vision, natural language processing, and recommender systems. Even though many approaches have been proposed, how well these approaches balance different tasks on each parameter still remains unclear. In this paper, we propose to measure the task dominance degree of a parameter by the total updates of each task on this parameter. Specifically, we compute the total updates by the exponentially decaying Average of the squared Updates (AU) on a parameter from the corresponding task. Based on this novel metric, we observe that many parameters in existing MTL methods, especially those in the higher shared layers, are still dominated by one or a few tasks. The dominance of AU is mainly due to the dominance of accumulative gradients from one or a few tasks. Motivated by this, we propose a Task-wise Adaptive learning rate approach, AdaTask in short, to separate the accumulative gradients, and hence the learning rate, of each task for each parameter in adaptive learning rate approaches (e.g., AdaGrad, RMSProp, and Adam). Comprehensive experiments on computer vision and recommender system MTL datasets demonstrate that AdaTask significantly improves the performance of dominated tasks, resulting in SOTA average task-wise performance. Analysis on both synthetic and real-world datasets shows that AdaTask balances the parameters in every shared layer well.
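The task-wise separation idea can be sketched as an RMSProp-style update that keeps one squared-gradient accumulator per task, so each task's step is normalized by its own gradient scale. This is an illustrative simplification (function and variable names are hypothetical), not AdaTask's exact algorithm.

```python
def adatask_rmsprop_step(param, task_grads, accumulators,
                         lr=0.1, beta=0.9, eps=1e-8):
    """One scalar-parameter step with per-task accumulators.

    task_grads:   {task_name: gradient of that task's loss w.r.t. param}
    accumulators: {task_name: running average of squared gradients}, mutated.
    """
    update = 0.0
    for task, g in task_grads.items():
        # Per-task exponentially decaying average of squared gradients.
        acc = beta * accumulators.get(task, 0.0) + (1 - beta) * g * g
        accumulators[task] = acc
        # Each task's step is scaled by its OWN accumulator, so a task with
        # large gradients cannot shrink the others' effective learning rate.
        update += lr * g / ((acc ** 0.5) + eps)
    return param - update

param = 1.0
accs = {}
# Task A's gradient is 100x larger than task B's, yet after per-task
# normalization both tasks contribute steps of nearly equal magnitude.
param = adatask_rmsprop_step(param, {"A": 10.0, "B": 0.1}, accs)
```

With a single shared accumulator (as in vanilla RMSProp), task A's large squared gradients would dominate the denominator and effectively silence task B on this parameter.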
We present a novel neural surface reconstruction method called NeuralRoom for reconstructing room-sized indoor scenes directly from a set of 2D images. Recently, implicit neural representations have become a promising way to reconstruct surfaces from multiview images due to their high-quality results and simplicity. However, implicit neural representations usually cannot reconstruct indoor scenes well because they suffer from severe shape-radiance ambiguity. We assume that an indoor scene consists of texture-rich regions and flat, texture-less regions. In texture-rich regions, multiview stereo can obtain accurate results. In flat areas, normal estimation networks usually produce good normal estimates. Based on these observations, we reduce the possible spatial variation range of the implicit neural surface using reliable geometric priors to alleviate shape-radiance ambiguity. Specifically, we use multiview stereo results to limit the NeuralRoom optimization space and then use reliable geometric priors to guide NeuralRoom training. NeuralRoom then produces a neural scene representation that can render images consistent with the input training images. In addition, we propose a smoothing method called the perturbation-residual restriction to improve the accuracy and completeness of flat regions, which assumes that the sampling points on a local surface should have the same normal and a similar distance to the observation center. Experiments on the ScanNet dataset show that our method can reconstruct the texture-less areas of indoor scenes while maintaining accuracy in detailed regions. We also apply NeuralRoom to more advanced multiview reconstruction algorithms and significantly improve their reconstruction quality.
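The perturbation-residual assumption, that sample points on a locally flat patch share one normal and lie at a similar distance from the observation center, suggests a variance-style penalty. The sketch below is an illustrative encoding of that assumption only, not the paper's exact loss.

```python
def variance(values):
    """Population variance of a list of floats."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def perturbation_residual_loss(normals, distances):
    """normals: list of (nx, ny, nz) estimated at nearby sample points;
    distances: their distances to the observation center.
    Penalizes disagreement in both quantities across the local patch."""
    normal_var = sum(variance([n[i] for n in normals]) for i in range(3))
    return normal_var + variance(distances)

# A perfectly flat patch incurs zero penalty...
flat_patch = perturbation_residual_loss(
    normals=[(0.0, 0.0, 1.0)] * 4,
    distances=[2.0, 2.0, 2.0, 2.0])

# ...while inconsistent normals/distances are penalized.
bumpy_patch = perturbation_residual_loss(
    normals=[(0.0, 0.0, 1.0), (0.0, 1.0, 0.0),
             (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)],
    distances=[2.0, 2.5, 1.5, 2.2])
```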
Fast and efficient semantic segmentation of large-scale LiDAR point clouds is a fundamental problem in autonomous driving. To achieve this goal, existing point-based methods mainly adopt a random sampling strategy to process large-scale point clouds. However, our quantitative and qualitative studies find that random sampling may be unsuitable for autonomous driving scenarios, because LiDAR points follow a non-uniform, even long-tailed, distribution across the whole space, which prevents the model from capturing sufficient information from different distance ranges and reduces the model's learning capability. To alleviate this problem, we propose a new polar-cylinder-balanced random sampling method, which enables the downsampled point clouds to maintain a more balanced distribution and improves segmentation performance under different spatial distributions. In addition, a sampling-consistency loss is introduced to further improve segmentation performance and reduce the model's variance under different sampling methods. Extensive experiments confirm that our approach yields excellent performance on both the SemanticKITTI and SemanticPOSS benchmarks, achieving improvements of 2.8% and 4.0%, respectively.
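The core idea of balanced sampling over polar-cylinder bins can be sketched as follows: partition points by radius and azimuth, then draw at most a fixed number of points per bin, so the dense near-range region cannot dominate the sample. Bin counts and the per-bin quota here are illustrative choices, not the paper's configuration.

```python
import math
import random

def polar_balanced_sample(points, n_radial=4, n_angular=8, per_bin=2, seed=0):
    """Bin (x, y, z) points by (radius, azimuth) in the horizontal plane and
    draw up to `per_bin` points from each non-empty bin."""
    rng = random.Random(seed)
    max_r = max(math.hypot(x, y) for x, y, _ in points) + 1e-9
    bins = {}
    for p in points:
        x, y, _ = p
        r_idx = min(int(math.hypot(x, y) / max_r * n_radial), n_radial - 1)
        a_idx = int((math.atan2(y, x) + math.pi) / (2 * math.pi) * n_angular) % n_angular
        bins.setdefault((r_idx, a_idx), []).append(p)
    sample = []
    for bucket in bins.values():
        rng.shuffle(bucket)
        sample.extend(bucket[:per_bin])  # equal quota per occupied bin
    return sample

# 100 dense near-range points plus a handful of sparse far-range points:
near = [(0.1 * i % 1.0, 0.05 * i % 1.0, 0.0) for i in range(1, 101)]
far = [(50.0, 50.0, 0.0), (-60.0, 10.0, 0.0), (5.0, -70.0, 0.0)]
sampled = polar_balanced_sample(near + far)
```

Under plain random sampling the far points would almost certainly be lost at this downsampling rate; with per-bin quotas they all survive because each occupies its own sparse bin.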
Knowledge distillation is an effective way to transfer knowledge from a strong teacher to an efficient student model. Ideally, we expect that the better the teacher, the better the student. However, this expectation does not always come true. Usually, due to the non-negligible gap between teacher and student, a better teacher model yields a worse student through distillation. To bridge this gap, we propose PROD, a progressive distillation method for dense retrieval. PROD consists of teacher progressive distillation and data progressive distillation to gradually improve the student. We conduct extensive experiments on five widely used benchmarks, MS MARCO Passage, TREC Passage 19, TREC Document 19, MS MARCO Document, and Natural Questions, where PROD achieves the state-of-the-art among distillation methods for dense retrieval. The code and models will be released.
The idea of cooperative perception is to benefit from the shared perception data of multiple vehicles and overcome the limitations of the on-board sensors of a single vehicle. However, fusing multi-vehicle information remains challenging due to localization inaccuracy, limited communication bandwidth, and ambiguous fusion. Past practice simplified the problem by installing precise GNSS positioning systems, manually specifying the number of connected vehicles, and fixing the fusion strategy. This paper proposes a map-based cooperative perception framework, named map container, to improve the accuracy and robustness of cooperative perception and ultimately overcome this problem. The concept of a "map container" means that the map serves as the platform onto which all information is transformed into the map coordinate space, merging the different information sources into a distributed fusion architecture. In the proposed map container, GNSS signals and the matching relations between sensor features and map features are considered to optimize the estimation of the environment state. Evaluation results on simulation datasets and a real-vehicle platform validate the effectiveness of the proposed method.
Few-shot classification, which aims to recognize unseen classes using very limited samples, has attracted increasing attention. It is usually formulated as a metric learning problem. The core issues of few-shot classification are how to learn (1) consistent representations for images in both the support and query sets and (2) effective metric learning between images in the support and query sets. In this paper, we show that these two challenges can be modeled simultaneously with a unified Query-Support Transformer (QSFormer) model. Specifically, the proposed QSFormer involves a global query-support sample Transformer (SampleFormer) branch and a local patch Transformer (PatchFormer) learning branch. SampleFormer aims to capture the dependencies of samples across the support and query sets for image representation. It adopts an encoder, a decoder, and cross-attention to respectively model the support set, the query (image) representation, and metric learning for the few-shot classification task. Also, as a complement to the global learning branch, we adopt the local PatchFormer to extract the structural representation of each image sample by capturing the long-range dependencies of local image patches. In addition, a novel Cross-scale Interactive Feature Extractor (CIFE) is proposed to extract and fuse multi-scale CNN features as an effective backbone module for the proposed few-shot learning method. All modules are integrated into a unified framework and trained in an end-to-end manner. Extensive experiments on four popular datasets demonstrate the effectiveness and superiority of the proposed QSFormer.
Lateral inhibitory connections have been observed in the cortex of biological brains, and their role in cognitive functions has been studied extensively. However, in the vanilla version of backpropagation in deep learning, all gradients (which can be understood as signal and noise gradients) flow through the network during the weight update. This may lead to overfitting. In this work, inspired by biological lateral inhibition, we propose Gradient Mask, which effectively filters out noise gradients during backpropagation. This allows the learned feature information to be stored more robustly in the network while noisy or unimportant features are filtered out. Furthermore, we analytically demonstrate how lateral inhibition in artificial neural networks improves the quality of the propagated gradients. A new criterion for gradient quality is proposed, which can be used as a measure when training various convolutional neural networks (CNNs). Finally, we conduct several experiments to study how Gradient Mask improves the performance of the network both quantitatively and qualitatively. Quantitatively, accuracy on the original CNN architecture, accuracy after pruning, and accuracy after adversarial attacks all show improvements. Qualitatively, CNNs trained with Gradient Mask develop saliency maps that focus mainly on the object of interest, which is useful for data augmentation and network interpretability.
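The masking idea, suppressing weak ("noise") gradient components so that only the strongest signals update the weights, can be sketched with a simple top-k magnitude rule. The selection criterion here is an illustrative choice, not necessarily the one used by Gradient Mask.

```python
def mask_gradients(grads, keep_ratio=0.5):
    """Keep the largest-magnitude `keep_ratio` fraction of gradient
    components and zero out the rest (a lateral-inhibition-style filter)."""
    k = max(1, int(len(grads) * keep_ratio))
    # Magnitude of the k-th largest component becomes the survival threshold.
    threshold = sorted((abs(g) for g in grads), reverse=True)[k - 1]
    return [g if abs(g) >= threshold else 0.0 for g in grads]

grads = [0.9, -0.05, 0.02, -1.2, 0.4, 0.01]
masked = mask_gradients(grads, keep_ratio=0.5)
```

In a real network this filter would be applied per layer inside a backward hook, so small gradient contributions are inhibited before the optimizer step.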